R version 3.6.1 (2019-07-05) and the R-packages tidyverse [Version 1.2.1], rlang [Version 0.4.1], here [Version 0.1], brms [Version 2.10.0], tidybayes [Version 1.1.0], bayestestR [Version 0.4.0], modelr [Version 0.1.5], ggforce [Version 0.3.1], ggrepel [Version 0.8.1], ggridges [Version 0.5.1], irr [Version 0.84.1], and kableExtra [Version 1.1.0] were used for data preparation, analysis, and presentation.

Models

The data were analysed using Bayesian distributional models in the brms R-package. These models account for the fact that nLEDs are bounded between 0 and 1, with inflated counts at these bounds on a trial-by-trial basis, resulting in non-normal distributions. Crucially, in contrast to general linear models and linear mixed-effects models, these models do not make predictions outside the possible range of values and accurately capture the larger densities at the extreme values. At the time of writing, distributional models of this nature are only available for hierarchical data using the brms R-package, which requires model fitting to be performed in a Bayesian framework. As an additional benefit, Bayesian models do not suffer from the non-convergence associated with fitting complex models in a frequentist framework.

Zero-one Inflated Beta Distributions

The models were fitted using a zero-one inflated Beta distribution, which models the data as a Beta distribution for nLEDs excluding 0 and 1, and a Bernoulli distribution for binary nLEDs of 0 and 1. Thus, predictors in the model can affect four distributional parameters: \(\mu\) (mu), the mean of the nLEDs excluding 0 and 1; \(\phi\) (phi), the precision (i.e. spread) of the nLEDs excluding 0 and 1; \(\alpha\) (alpha; termed zoi, or zero-one inflation, in brms), the probability of an nLED of 0 or 1; and \(\gamma\) (gamma; termed coi, or conditional one inflation, in brms), the conditional probability of a 1 given that a 0 or 1 has been observed. Larger values for these parameters are associated with (a) higher mean nLEDs in the range excluding 0 and 1, (b) tighter distributions of the nLEDs in the range excluding 0 and 1 (i.e. less variance), (c) more zero-one inflation in nLEDs, and (d) more one-inflation among the zero-one inflated nLEDs. Predictors in the model can influence any and all of these distributional parameters at once. For these models, a logit link is used for the \(\mu\), \(\alpha\), and \(\gamma\) distributional parameters, and a log link is used for the \(\phi\) distributional parameter.
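The mixture described above can be sketched numerically. The following Python sketch (the analysis itself used R and brms; the function names here are illustrative) evaluates the zero-one inflated Beta log-density and the overall expected value implied by the four parameters:

```python
import math

def zoib_logpdf(y, mu, phi, alpha, gamma):
    """Log-density of a zero-one inflated Beta distribution:
    P(y = 0) = alpha * (1 - gamma); P(y = 1) = alpha * gamma;
    y in (0, 1) ~ (1 - alpha) * Beta(mu * phi, (1 - mu) * phi)."""
    if y == 0.0:
        return math.log(alpha) + math.log(1.0 - gamma)
    if y == 1.0:
        return math.log(alpha) + math.log(gamma)
    a, b = mu * phi, (1.0 - mu) * phi  # mean/precision -> Beta shape parameters
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (math.log(1.0 - alpha)
            + (a - 1.0) * math.log(y) + (b - 1.0) * math.log(1.0 - y)
            - log_norm)

def zoib_mean(mu, phi, alpha, gamma):
    """Overall expected nLED: the mass at 1 plus the continuous part."""
    return alpha * gamma + (1.0 - alpha) * mu
```

For example, with mu = 0.5 and phi = 2 the Beta part is uniform on (0, 1), so with alpha = 0.2 and gamma = 0.5 the density at any interior point is 1 - alpha = 0.8, each boundary carries probability 0.1, and the overall mean is 0.5.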

Model Fitting

Model Specification

Three models were fitted in total: (1) assessing performance across conditions during the vocabulary test prior to literacy training; (2) assessing performance across conditions during the testing phase following literacy training; and (3) assessing performance across conditions during the testing phase following literacy training, using vocabulary test performance as a predictor. This latter model was not pre-registered, but instead serves an exploratory purpose: to determine whether or not any effect of dialect exposure is mediated by initial performance. In all models, population-level and group-level effects are estimated for all distributional parameters, with group-level effects correlated across all parameters.

The models were specified as follows:

  • Vocabulary Test Model: nLEDs are predicted by population-level (fixed) effects of Variety Exposure condition (with four levels: Variety Match, Mismatch, Mismatch Social, and Dialect Literacy), Word Type (with two levels: Contrastive and Non-contrastive), and the interaction between them, and by group-level (random) intercepts and slopes of Word Type by participant, and intercepts and slopes of Variety Exposure by item.

  • Testing Model: nLEDs are predicted by population-level (fixed) effects of Task (with two levels: Reading and Spelling), Variety Exposure condition, and Word Type, and the interactions between them, and by group-level (random) intercepts and slopes of Task and Word Type by participant, and intercepts and slopes of Variety Exposure by item. Crucially, the group-level effects by participant did not include the interaction between Task and Word Type, in order to reduce model complexity.

  • Exploratory Covariate Testing Model: nLEDs are predicted by population-level (fixed) effects of mean nLED during the Vocabulary Test, Task, Variety Exposure condition, and Word Type, and the interactions between them, and by group-level (random) intercepts and slopes of Task and Word Type by participant, and intercepts and slopes of mean nLED during the Vocabulary Test and Variety Exposure by item. Again, the group-level effects by participant did not include the interaction between Task and Word Type, in order to reduce model complexity.

Model Priors

In all models, the approach was to use weakly informative, regularising priors for fitting. Where models failed to converge, these priors were adjusted, typically placing less prior weight on extreme values.

Here, priors are described first by their expected distribution, and then by the parameters that define that distribution. For example, a prior of \(\mathcal{N}(0, 1)\) describes a normal distribution with a mean of 0 and a standard deviation of 1. Similarly, a prior of \(\mathcal{logistic}(0, 1)\) describes a logistic distribution with a location of 0 and a scale of 1. Note that, by default, brms restricts priors on the SD terms to be positive.
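As a check on intuition (not part of the original analysis), a \(\mathcal{logistic}(0, 1)\) prior on an intercept under a logit link corresponds to a flat prior on the probability scale, because the logit of a Uniform(0, 1) variable is exactly Logistic(0, 1). A short Python sketch illustrates this:

```python
import math
import random

def inv_logit(x):
    """Inverse logit (logistic) function, mapping the real line to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
# Draw from Logistic(0, 1) via the inverse CDF: the logit of a uniform draw.
logit_draws = [math.log(u / (1.0 - u))
               for u in (random.random() for _ in range(100_000))]

# Push the draws back through the link: the implied prior on the
# probability scale is Uniform(0, 1), so quantiles sit near nominal values.
probs = sorted(inv_logit(x) for x in logit_draws)
median = probs[len(probs) // 2]
lower_quartile = probs[len(probs) // 4]
```

With 100,000 draws the median lands near 0.5 and the lower quartile near 0.25, confirming the flat implied prior; a tighter prior such as \(\mathcal{N}(0, 1)\) on the logit scale would instead concentrate mass away from 0 and 1.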

The following priors were used for the Vocabulary Test (exposure) model:

  • Intercept
    • \(\mu\): \(\mathcal{N}(0, 5)\)
    • \(\phi\): \(\mathcal{N}(0, 3)\)
    • \(\alpha\): \(\mathcal{logistic}(0, 1)\)
    • \(\gamma\): \(\mathcal{logistic}(0, 1)\)
  • Slope
    • \(\mu\): \(\mathcal{N}(0, 0.5)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 0.5)\)
  • SD
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 5)\)
  • SD by Participant Number
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • SD by Item
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • Correlation
    • \(LKJ(2)\)

Weakly informative regularising priors were used for all terms. All priors were centred on 0, with standard deviations ranging from 0.5 to 10, thus allowing for a range of values while placing less prior probability on extreme responses. Largely, these priors allow the posterior to be determined primarily by the data. For the slope terms, the priors assume anything from no effect to small effects for each parameter in either direction. Weakly informative regularising priors were also used for all standard deviation terms. Finally, an \(LKJ(2)\) prior was used for the correlation between terms, which acts to down-weight perfect correlations (Vasishth et al., 2018 - CITATION). These priors are in some cases more informative than initially planned in our pre-registration (which used very weakly informative priors), in order to improve model fit (i.e. to account for divergences during fitting). For example, the \(\mu\) intercept and slope and the \(\gamma\) slope have standard deviations half as large as planned, while the standard deviation for the \(\phi\) intercept is three times as large as initially planned. Additionally, 8,000 iterations were used instead of 1,000, and 6 chains were used rather than 4, to improve estimates in response to warnings about bulk and tail effective sample size, totalling 48,000 samples rather than the planned 4,000.

For both testing models, the following priors were used:

  • Intercept
    • \(\mu\): \(\mathcal{N}(0, 5)\)
    • \(\phi\): \(\mathcal{N}(0, 3)\)
    • \(\alpha\): \(\mathcal{logistic}(0, 1)\)
    • \(\gamma\): \(\mathcal{logistic}(0, 1)\)
  • Slope
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 1)\)
  • SD
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 1)\)
    • \(\alpha\): \(\mathcal{N}(0, 5)\)
    • \(\gamma\): \(\mathcal{N}(0, 5)\)
  • SD by Participant Number
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • SD by Item
    • \(\mu\): \(\mathcal{N}(0, 1)\)
    • \(\phi\): \(\mathcal{N}(0, 5)\)
    • \(\alpha\): \(\mathcal{N}(0, 10)\)
    • \(\gamma\): \(\mathcal{N}(0, 10)\)
  • Correlation
    • \(LKJ(2)\)

Due to the larger number of observations available for analyses during the testing phase, both the \(\mu\) and \(\gamma\) slope terms use more weakly informative priors than in the exposure model. This allows the data to have a larger impact on the parameter estimates without adversely affecting model convergence.

Model Checks

Posterior predictive checks were performed for all three models, comparing the observed data density against samples drawn from the fitted model. Well-fitting models show concordance between the observed and sampled densities. Plots for each model are displayed below. Grey lines indicate samples from the posterior predictive distribution, while black lines indicate the observed sample density.

As can be seen from the plots, the posterior predictive checks indicate a generally good model fit in all instances, such that the model largely captures the shape of the data (e.g. especially the 0 and 1 inflation in the testing model), but does not capture some discrepancies in the data which do not arise from any particular process (e.g. some larger densities in the testing model within the range 0-1).

Vocabulary Test Model

A summary of the population-level (fixed) effects for the Vocabulary Test model is provided below. This can be used to determine model diagnostics, coefficients, and estimates around these coefficients using 95% credible intervals. To answer questions pertaining to our pre-registered hypotheses, and to generate plots for these summaries, we used draws from the posterior for different combinations of conditions using the tidybayes [Version 1.1.0] R-package.

In all following plots and reported statistics, summaries are provided for the joint posterior of the model, taking into account all distributional parameters during sampling. This provides an overall nLED for any comparison, rather than separate estimates of nLEDs between the bounds of 0 and 1 and for the extremes of 0 and 1. For results reported in tables, estimates are based on the median and the credible interval around the median. The median was selected to summarise these models over the mean as it is more robust to distributions with more than one mode. Thus, we do not provide individual statistics and plots for the individual distributional terms (e.g. for zero-one inflation, or conditional one inflation) as we did not specify any hypotheses related to these individual terms. Instead, the zero-one inflated Beta models are used purely to improve model fit and to make more accurate predictions about the overall differences in nLEDs across conditions. Here, 90% credible intervals are used to summarise uncertainty, as these are more stable than wider intervals given a limited number of draws from the posterior (Kruschke, 2014).
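The robustness of the median for multimodal posteriors can be seen with a toy example (illustrative only, not drawn from the fitted models): when draws pile up at two modes, the mean falls in a region of negligible posterior mass, while the median stays on the dominant mode.

```python
import statistics

# A toy bimodal "posterior": heavy mass near the inflation points 0 and 1.
draws = [0.05] * 60 + [0.95] * 40

mean_summary = statistics.mean(draws)      # ~0.41: almost no posterior mass here
median_summary = statistics.median(draws)  # 0.05: sits on the dominant mode
```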

To determine support for hypotheses using these estimates, the probability of direction \(P(direction)\), or pd, is provided. This is defined as the proportion of the posterior that is of the same sign as the median. In previous simulations, the pd has been found to be linearly related to the frequentist p-value (Makowski et al., 2019). The pd therefore provides an index of the existence of an effect, outlining certainty in whether an effect is positive or negative. This can ultimately be used to reject the null hypothesis but, like the frequentist p-value, it does not give a reliable estimate of evidence in support of the null hypothesis. Additional hypothesis tests are provided in the form of Region of Practical Equivalence (ROPE) analyses from these draws using the bayestestR [Version 0.4.0] R-package. This defines an area around the point null that is practically equivalent to zero for assessing evidence in support of the null hypothesis (Kruschke, 2014). Here, the bounds of the ROPE range are defined as half the smallest effect reported in the Williams et al. (forthcoming) parameter estimates, and intervals report the 90% highest density interval (HDI) of the posterior. The HDI differs from the equal-tailed intervals used for summary statistics in that values within the range are always more probable than values outside of it, and the interval need not exclude an equal amount of the distribution in each tail. With symmetric distributions, the two methods produce similar results.
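These quantities are straightforward to compute directly from posterior draws. The sketch below (illustrative Python; the reported values came from bayestestR) implements the probability of direction, a highest density interval, and the proportion of that HDI falling inside a ROPE:

```python
import math

def p_direction(draws):
    """Proportion of the posterior sharing the sign of the median."""
    xs = sorted(draws)
    median = xs[len(xs) // 2]
    if median >= 0:
        return sum(1 for d in draws if d >= 0) / len(draws)
    return sum(1 for d in draws if d < 0) / len(draws)

def hdi(draws, prob=0.9):
    """Narrowest interval containing `prob` of the draws."""
    xs = sorted(draws)
    k = math.ceil(prob * len(xs))
    start = min(range(len(xs) - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[start], xs[start + k - 1]

def rope_percentage(draws, rope=(-0.035, 0.035), prob=0.9):
    """Percentage of the `prob` HDI falling inside the ROPE bounds."""
    lo, hi = hdi(draws, prob)
    in_hdi = [d for d in draws if lo <= d <= hi]
    in_rope = [d for d in in_hdi if rope[0] <= d <= rope[1]]
    return 100.0 * len(in_rope) / len(in_hdi)
```

A posterior centred on zero gives a pd near 50% and a high ROPE percentage; a posterior well away from zero gives a pd near 100% and a ROPE percentage of 0.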

In plots, posterior medians and 80% and 90% credible intervals are provided for different conditions in the plots below. Table summaries also provide posterior medians along with 90% credible intervals.

In the tables of population-level (fixed) effects, \(\hat{R}\) is a measure of convergence comparing within- and between-chain estimates, with values closer to 1 being preferable. The bulk and tail effective sample sizes give diagnostics of the number of independent draws that would contain the same amount of information as the dependent (autocorrelated) sample (CITE STAN WEBSITE), with higher values being preferable. The tail effective sample size is determined at the 5% and 95% quantiles, while the bulk effective sample size is determined at values between these quantiles.
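For intuition, a simplified version of the split-\(\hat{R}\) statistic can be computed by splitting each chain in half and comparing between- and within-half variances (the Stan implementation additionally rank-normalises the draws; this sketch omits that step):

```python
import math

def split_rhat(chains):
    """Simplified split-R-hat: split each chain in half, then compare
    between-half variance (b) with within-half variance (w)."""
    halves = []
    for chain in chains:
        n = len(chain) // 2
        halves.append(chain[:n])
        halves.append(chain[n:2 * n])
    m, n = len(halves), len(halves[0])
    means = [sum(h) / n for h in halves]
    grand_mean = sum(means) / m
    b = n / (m - 1) * sum((mu - grand_mean) ** 2 for mu in means)
    w = sum(sum((x - mu) ** 2 for x in h) / (n - 1)
            for h, mu in zip(halves, means)) / m
    var_plus = (n - 1) / n * w + b / n  # estimate of the marginal posterior variance
    return math.sqrt(var_plus / w)
```

Two well-mixed chains exploring the same distribution give values near 1; chains stuck in different regions inflate the between-half variance and give values well above 1.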

Parameter Estimate Est. Error Interval Rhat Bulk ESS Tail ESS
\(\mu\)
Intercept 0.341 0.029 [0.284, 0.396] 1.001 7824 12363
Variety Exposure1 0.027 0.037 [-0.046, 0.098] 1.000 6736 11607
Variety Exposure2 -0.042 0.037 [-0.115, 0.031] 1.001 7308 12508
Variety Exposure3 0.002 0.036 [-0.070, 0.074] 1.000 6644 11987
Word Type 0.030 0.022 [-0.013, 0.074] 1.000 10320 15405
Variety Exposure1 : Word Type -0.002 0.021 [-0.043, 0.039] 1.000 30108 19005
Variety Exposure2 : Word Type -0.002 0.021 [-0.044, 0.039] 1.000 28808 19137
Variety Exposure3 : Word Type 0.002 0.020 [-0.038, 0.042] 1.000 31508 19390
\(\phi\)
Intercept 1.819 0.033 [1.756, 1.884] 1.002 5350 13365
Variety Exposure1 -0.011 0.041 [-0.093, 0.069] 1.000 12336 18285
Variety Exposure2 -0.055 0.041 [-0.136, 0.026] 1.000 12602 16852
Variety Exposure3 0.016 0.040 [-0.063, 0.094] 1.000 13794 17082
Word Type 0.004 0.029 [-0.054, 0.062] 1.000 10054 17796
Variety Exposure1 : Word Type -0.025 0.037 [-0.099, 0.047] 1.000 29030 19408
Variety Exposure2 : Word Type 0.006 0.036 [-0.065, 0.076] 1.000 29955 19532
Variety Exposure3 : Word Type 0.040 0.035 [-0.029, 0.110] 1.000 31022 20460
\(\alpha\)
Intercept -0.140 0.099 [-0.336, 0.052] 1.000 15525 17181
Variety Exposure1 0.089 0.088 [-0.084, 0.261] 1.000 13530 17076
Variety Exposure2 0.047 0.085 [-0.120, 0.213] 1.001 13206 16313
Variety Exposure3 -0.098 0.084 [-0.264, 0.067] 1.000 13104 15637
Word Type -0.075 0.088 [-0.250, 0.095] 1.000 14507 15504
Variety Exposure1 : Word Type -0.008 0.050 [-0.107, 0.091] 1.000 25967 18558
Variety Exposure2 : Word Type -0.004 0.044 [-0.090, 0.083] 1.000 31103 19435
Variety Exposure3 : Word Type 0.049 0.045 [-0.040, 0.138] 1.000 29087 18243
\(\gamma\)
Intercept 1.152 0.200 [0.761, 1.550] 1.000 11742 16173
Variety Exposure1 0.190 0.216 [-0.232, 0.615] 1.000 11829 16574
Variety Exposure2 -0.259 0.192 [-0.638, 0.118] 1.000 9681 14910
Variety Exposure3 0.065 0.196 [-0.319, 0.448] 1.000 9621 14270
Word Type 0.216 0.160 [-0.102, 0.532] 1.000 15056 16693
Variety Exposure1 : Word Type -0.048 0.140 [-0.325, 0.227] 1.000 21307 18614
Variety Exposure2 : Word Type -0.074 0.086 [-0.243, 0.094] 1.000 31341 17516
Variety Exposure3 : Word Type 0.007 0.095 [-0.179, 0.196] 1.000 25910 19286

Variety Exposure

Posterior medians and 90% credible intervals are provided in the table below.

Variety Exposure Median Percentile Interval
Variety Match 0.689 [0.623, 0.736]
Variety Mismatch 0.638 [0.568, 0.693]
Variety Mismatch Social 0.666 [0.600, 0.718]
Dialect Literacy 0.665 [0.577, 0.723]

The differences between conditions were compared using the compare_levels() function from the tidybayes [Version 1.1.0] R-package. This allows for a direct comparison of differences between groups, which provides a more accurate and reliable method of establishing group differences than visually inspecting whether the credible intervals of the individual group estimates overlap (Schenker & Gentleman, 2001). Here, the posterior is summarised as the median and the 90% credible interval around the median. We also evaluated equivalence in the nLEDs between Variety Exposure conditions by determining a region of practical equivalence (ROPE), by which any effects between the reported bounds are deemed practically equivalent to 0. In all instances, this is determined at the 90% credible interval (CI) bound of the highest density interval (HDI). We report the proportion of the HDI contained within the ROPE region, along with the bounds of this interval. Where HDIs are entirely contained by the equivalence bounds, equivalence is accepted. Where HDIs are entirely outside the equivalence bounds, equivalence is rejected. Uncertainty is assigned to any HDIs that cross the equivalence bounds in either (or both) directions. Finally, the probability of direction is reported, showing the proportion of the posterior that is of the median’s sign (i.e. how much of the posterior supports a positive or negative effect).
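The comparison operates on the draws themselves: for each pair of conditions, the posterior of the difference is obtained by subtracting draw-for-draw (matching draws by their MCMC index) and then summarising. A rough Python analogue of this part of compare_levels() (the function and argument names here are illustrative):

```python
import itertools
import statistics

def compare_levels(draws_by_level, prob=0.9):
    """Pairwise posterior differences between named conditions,
    summarised as the median and an equal-tailed credible interval."""
    tail = (1.0 - prob) / 2.0
    results = {}
    for a, b in itertools.combinations(draws_by_level, 2):
        # Subtract draw-for-draw so within-draw dependencies are respected.
        diffs = sorted(x - y for x, y in zip(draws_by_level[a],
                                             draws_by_level[b]))
        n = len(diffs)
        results[f"{a} - {b}"] = (statistics.median(diffs),
                                 diffs[int(tail * n)],
                                 diffs[int((1.0 - tail) * n) - 1])
    return results
```

For two conditions whose draws differ by a constant, the median of the difference recovers exactly that constant with a zero-width interval.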

Variety Exposure Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Variety Match - Variety Mismatch 0.049 [-0.01, 0.11] 33.103 [-0.01, 0.11] 90.854
Variety Match - Variety Mismatch Social 0.021 [-0.04, 0.08] 65.936 [-0.04, 0.08] 73.044
Variety Match - Dialect Literacy 0.024 [-0.04, 0.10] 57.841 [-0.05, 0.10] 71.042
Variety Mismatch - Variety Mismatch Social -0.028 [-0.09, 0.03] 58.489 [-0.09, 0.03] 78.383
Variety Mismatch - Dialect Literacy -0.025 [-0.09, 0.05] 56.550 [-0.10, 0.04] 72.650
Variety Mismatch Social - Dialect Literacy 0.003 [-0.06, 0.07] 71.961 [-0.06, 0.07] 52.710
Note:
ROPE range = [-0.035, 0.035]. ROPE determined at the 90% CI of the HDI.

While nLEDs in the Variety Mismatch condition are generally lower than those in the Variety Match condition, approximately a third (33%) of the difference scores are contained by the equivalence bounds, and approximately 91% of the posterior difference is of the same sign as the median.

Similarly, while nLEDs are generally higher in the two intervention conditions (Variety Mismatch Social and Dialect Literacy) than in the Variety Mismatch condition, over half of the difference scores are contained by the equivalence bounds. All other differences are largely undecided.

Word Type by Variety Exposure

We also looked at whether there are any differences in performance for different word types across conditions during the vocabulary testing phase.

Posterior medians and 90% credible intervals are provided in the table below.

Variety Exposure Word Type Median Percentile Interval
Variety Match Non-Contrastive 0.675 [0.611, 0.725]
Variety Match Contrastive 0.700 [0.648, 0.741]
Variety Mismatch Non-Contrastive 0.623 [0.555, 0.679]
Variety Mismatch Contrastive 0.652 [0.593, 0.700]
Variety Mismatch Social Non-Contrastive 0.645 [0.587, 0.692]
Variety Mismatch Social Contrastive 0.686 [0.637, 0.726]
Dialect Literacy Non-Contrastive 0.635 [0.561, 0.695]
Dialect Literacy Contrastive 0.689 [0.634, 0.731]

Comparisons between levels of Word Type are shown for each Variety Exposure condition below. Here, all effects span either side of 0, suggesting that there is no reliable difference between word types by Variety Exposure condition in the vocabulary test. In all instances there is some evidence that performance is better for non-contrastive words relative to contrastive words; however, the HDI around the parameter spans zero in all instances, and only a small proportion of the HDI is contained by the ROPE range.

Variety Exposure Word Type Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Variety Match Contrastive - Non-Contrastive 0.025 [-0.03, 0.08] 38.730 [-0.03, 0.08] 77.388
Variety Mismatch Contrastive - Non-Contrastive 0.029 [-0.03, 0.09] 35.063 [-0.03, 0.09] 79.358
Variety Mismatch Social Contrastive - Non-Contrastive 0.040 [-0.01, 0.09] 22.920 [-0.01, 0.09] 91.096
Dialect Literacy Contrastive - Non-Contrastive 0.053 [-0.01, 0.12] 16.106 [-0.01, 0.12] 92.008
Note:
ROPE range = [-0.02, 0.02]. ROPE determined at the 90% CI of the HDI.

Testing Phase Model

A summary of the Testing Phase model is provided below. This can be used to determine model diagnostics and coefficients. As with the Vocabulary Test Model, to answer questions pertaining to our pre-registered hypotheses, and to generate plots for these summaries, we used draws from the posterior for different combinations of conditions. Similarly, hypothesis tests are provided in the form of ROPE analyses and pd.

Parameter Estimate Est. Error Interval Rhat Bulk ESS Tail ESS
\(\mu\)
Intercept -0.393 0.039 [-0.470, -0.317] 1.009 872 2362
Task -0.057 0.009 [-0.075, -0.040] 1.002 4293 10471
Variety Exposure1 -0.030 0.056 [-0.138, 0.080] 1.009 760 1537
Variety Exposure2 -0.025 0.056 [-0.134, 0.083] 1.005 976 1969
Variety Exposure3 0.028 0.055 [-0.081, 0.134] 1.007 882 1478
Word Type -0.003 0.022 [-0.046, 0.039] 1.002 3844 7230
Word Familiarity -0.002 0.014 [-0.030, 0.026] 1.002 3580 7359
Task : Variety Exposure1 0.007 0.015 [-0.022, 0.037] 1.001 4333 9246
Task : Variety Exposure2 0.021 0.015 [-0.009, 0.050] 1.000 5496 11559
Task : Variety Exposure3 -0.011 0.015 [-0.040, 0.018] 1.001 4634 11692
Task : Word Type -0.007 0.006 [-0.018, 0.006] 1.001 27098 16538
Task : Word Familiarity 0.010 0.004 [0.003, 0.017] 1.000 23157 17466
Variety Exposure1 : Word Type 0.010 0.011 [-0.012, 0.031] 1.000 20350 18569
Variety Exposure2 : Word Type 0.021 0.011 [-0.002, 0.043] 1.000 19552 16539
Variety Exposure3 : Word Type -0.006 0.011 [-0.028, 0.016] 1.000 21319 19307
Variety Exposure1 : Word Familiarity -0.007 0.009 [-0.025, 0.011] 1.003 2782 7124
Variety Exposure2 : Word Familiarity 0.001 0.009 [-0.018, 0.019] 1.001 3279 8546
Variety Exposure3 : Word Familiarity -0.004 0.010 [-0.022, 0.015] 1.001 3022 7788
Task : Variety Exposure1 : Word Type 0.005 0.011 [-0.015, 0.026] 1.000 22200 18142
Task : Variety Exposure2 : Word Type 0.005 0.010 [-0.015, 0.026] 1.000 20607 18319
Task : Variety Exposure3 : Word Type -0.013 0.010 [-0.033, 0.008] 1.000 23328 18455
Task : Variety Exposure1 : Word Familiarity -0.006 0.006 [-0.018, 0.006] 1.000 22972 18686
Task : Variety Exposure2 : Word Familiarity -0.004 0.006 [-0.017, 0.008] 1.000 21835 19248
Task : Variety Exposure3 : Word Familiarity 0.002 0.006 [-0.011, 0.014] 1.000 21982 19288
\(\phi\)
Intercept 2.643 0.044 [2.558, 2.731] 1.003 2173 5803
Task -0.184 0.020 [-0.222, -0.146] 1.000 10289 17192
Variety Exposure1 0.065 0.059 [-0.052, 0.179] 1.003 2015 3984
Variety Exposure2 -0.031 0.059 [-0.146, 0.084] 1.002 2006 4967
Variety Exposure3 0.011 0.058 [-0.101, 0.127] 1.002 2415 5978
Word Type -0.003 0.033 [-0.069, 0.061] 1.000 8299 12907
Word Familiarity 0.095 0.023 [0.050, 0.140] 1.001 7498 12119
Task : Variety Exposure1 0.022 0.031 [-0.039, 0.084] 1.001 9083 13062
Task : Variety Exposure2 0.009 0.032 [-0.054, 0.071] 1.000 8361 14215
Task : Variety Exposure3 -0.039 0.031 [-0.100, 0.024] 1.000 7657 13387
Task : Word Type -0.017 0.014 [-0.045, 0.011] 1.000 24341 18144
Task : Word Familiarity 0.009 0.009 [-0.009, 0.028] 1.000 20434 16978
Variety Exposure1 : Word Type 0.024 0.026 [-0.028, 0.074] 1.000 19798 17214
Variety Exposure2 : Word Type -0.020 0.025 [-0.069, 0.029] 1.000 20696 17910
Variety Exposure3 : Word Type -0.024 0.027 [-0.077, 0.030] 1.000 17569 17857
Variety Exposure1 : Word Familiarity 0.019 0.020 [-0.020, 0.059] 1.000 12286 16528
Variety Exposure2 : Word Familiarity -0.026 0.020 [-0.065, 0.012] 1.000 12205 16205
Variety Exposure3 : Word Familiarity -0.001 0.021 [-0.042, 0.041] 1.000 12006 14273
Task : Variety Exposure1 : Word Type 0.047 0.024 [-0.001, 0.094] 1.000 22267 18108
Task : Variety Exposure2 : Word Type -0.001 0.024 [-0.048, 0.047] 1.000 21631 18927
Task : Variety Exposure3 : Word Type -0.001 0.024 [-0.048, 0.046] 1.000 22569 18027
Task : Variety Exposure1 : Word Familiarity 0.003 0.016 [-0.028, 0.033] 1.000 22592 18688
Task : Variety Exposure2 : Word Familiarity 0.004 0.016 [-0.027, 0.035] 1.000 19848 17845
Task : Variety Exposure3 : Word Familiarity -0.013 0.016 [-0.042, 0.018] 1.000 21878 18709
\(\alpha\)
Intercept -0.322 0.127 [-0.571, -0.070] 1.002 4357 9644
Task 0.276 0.027 [0.224, 0.329] 1.000 5887 12534
Variety Exposure1 0.030 0.111 [-0.189, 0.247] 1.002 2390 4549
Variety Exposure2 0.027 0.112 [-0.192, 0.246] 1.001 2437 6047
Variety Exposure3 0.009 0.111 [-0.206, 0.229] 1.002 3048 6900
Word Type 0.129 0.128 [-0.123, 0.383] 1.000 8811 12646
Word Familiarity -0.019 0.081 [-0.179, 0.139] 1.001 8079 13276
Task : Variety Exposure1 0.076 0.047 [-0.015, 0.167] 1.001 6314 12914
Task : Variety Exposure2 0.001 0.047 [-0.091, 0.092] 1.000 6271 12575
Task : Variety Exposure3 0.000 0.046 [-0.090, 0.091] 1.001 6217 10116
Task : Word Type 0.077 0.018 [0.043, 0.111] 1.000 33338 18081
Task : Word Familiarity -0.026 0.011 [-0.048, -0.005] 1.000 32607 18646
Variety Exposure1 : Word Type -0.071 0.038 [-0.145, 0.004] 1.000 15363 16866
Variety Exposure2 : Word Type -0.013 0.037 [-0.085, 0.060] 1.000 16414 17588
Variety Exposure3 : Word Type -0.084 0.037 [-0.157, -0.013] 1.000 15749 16738
Variety Exposure1 : Word Familiarity -0.046 0.029 [-0.104, 0.010] 1.000 10467 14463
Variety Exposure2 : Word Familiarity -0.016 0.029 [-0.072, 0.040] 1.000 9434 14577
Variety Exposure3 : Word Familiarity 0.066 0.028 [0.010, 0.121] 1.000 9835 15193
Task : Variety Exposure1 : Word Type -0.078 0.030 [-0.137, -0.019] 1.000 24660 18592
Task : Variety Exposure2 : Word Type 0.033 0.030 [-0.026, 0.091] 1.000 25700 18775
Task : Variety Exposure3 : Word Type 0.007 0.030 [-0.052, 0.067] 1.000 24559 18031
Task : Variety Exposure1 : Word Familiarity 0.013 0.020 [-0.024, 0.052] 1.000 26206 18406
Task : Variety Exposure2 : Word Familiarity -0.030 0.019 [-0.068, 0.008] 1.000 26062 18774
Task : Variety Exposure3 : Word Familiarity 0.024 0.019 [-0.014, 0.061] 1.000 26449 18581
\(\gamma\)
Intercept -3.865 0.395 [-4.663, -3.129] 1.004 1395 4218
Task -0.336 0.215 [-0.741, 0.106] 1.001 7653 11945
Variety Exposure1 -0.498 0.479 [-1.427, 0.448] 1.007 998 2146
Variety Exposure2 0.067 0.478 [-0.873, 0.994] 1.004 1216 2812
Variety Exposure3 0.213 0.471 [-0.714, 1.125] 1.006 1131 2431
Word Type -0.450 0.213 [-0.863, -0.026] 1.000 12521 15233
Word Familiarity 0.101 0.191 [-0.298, 0.455] 1.000 7840 11954
Task : Variety Exposure1 0.453 0.229 [0.006, 0.907] 1.001 5693 14185
Task : Variety Exposure2 -0.296 0.222 [-0.734, 0.140] 1.000 5655 13101
Task : Variety Exposure3 0.041 0.229 [-0.414, 0.488] 1.001 6168 13407
Task : Word Type -0.004 0.095 [-0.189, 0.185] 1.000 20633 17428
Task : Word Familiarity 0.123 0.087 [-0.051, 0.293] 1.000 20958 18994
Variety Exposure1 : Word Type 0.158 0.176 [-0.186, 0.502] 1.000 16019 16929
Variety Exposure2 : Word Type 0.210 0.172 [-0.128, 0.545] 1.000 15381 17442
Variety Exposure3 : Word Type 0.013 0.182 [-0.342, 0.369] 1.000 15063 16743
Variety Exposure1 : Word Familiarity -0.116 0.184 [-0.477, 0.241] 1.001 8137 13815
Variety Exposure2 : Word Familiarity 0.046 0.172 [-0.292, 0.385] 1.000 7207 12831
Variety Exposure3 : Word Familiarity 0.044 0.189 [-0.331, 0.416] 1.001 7974 13449
Task : Variety Exposure1 : Word Type 0.175 0.148 [-0.116, 0.466] 1.000 20221 18302
Task : Variety Exposure2 : Word Type 0.018 0.144 [-0.265, 0.302] 1.000 18800 18348
Task : Variety Exposure3 : Word Type -0.055 0.153 [-0.360, 0.245] 1.000 17793 17422
Task : Variety Exposure1 : Word Familiarity 0.498 0.149 [0.211, 0.798] 1.000 15651 15454
Task : Variety Exposure2 : Word Familiarity -0.390 0.133 [-0.661, -0.139] 1.000 16184 17639
Task : Variety Exposure3 : Word Familiarity -0.033 0.153 [-0.334, 0.268] 1.000 16146 16892

Variety Exposure

Posterior medians and 90% credible intervals are provided in the table below.

Variety Exposure Median Percentile Interval
Variety Match 0.240 [0.155, 0.317]
Variety Mismatch 0.243 [0.157, 0.333]
Variety Mismatch Social 0.253 [0.169, 0.340]
Dialect Literacy 0.264 [0.152, 0.361]
Variety Exposure Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Variety Match - Variety Mismatch -0.005 [-0.07, 0.06] 72.284 [-0.07, 0.06] 55.597
Variety Match - Variety Mismatch Social -0.018 [-0.08, 0.05] 63.918 [-0.08, 0.05] 67.399
Variety Match - Dialect Literacy -0.021 [-0.11, 0.05] 55.987 [-0.10, 0.05] 66.515
Variety Mismatch - Variety Mismatch Social -0.013 [-0.07, 0.06] 67.384 [-0.08, 0.06] 63.347
Variety Mismatch - Dialect Literacy -0.017 [-0.10, 0.06] 57.955 [-0.10, 0.06] 64.491
Variety Mismatch Social - Dialect Literacy -0.006 [-0.09, 0.07] 57.633 [-0.09, 0.07] 54.963
Note:
ROPE range = [-0.035, 0.035]. ROPE determined at the 90% CI of the HDI.

While the ROPE analyses are undecided, there do not appear to be any reliable differences across the Variety Exposure conditions in overall performance.

Variety Exposure for Novel Words Only

Posterior medians and 90% credible intervals are provided in the table below.

Variety Exposure Median Percentile Interval
Variety Match 0.253 [0.179, 0.320]
Variety Mismatch 0.250 [0.179, 0.373]
Variety Mismatch Social 0.245 [0.167, 0.344]
Dialect Literacy 0.266 [0.198, 0.364]
Variety Exposure Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Variety Match - Variety Mismatch -0.005 [-0.10, 0.07] 61.313 [-0.09, 0.08] 54.158
Variety Match - Variety Mismatch Social 0.004 [-0.08, 0.08] 63.964 [-0.08, 0.08] 53.400
Variety Match - Dialect Literacy -0.018 [-0.10, 0.06] 58.339 [-0.10, 0.06] 66.010
Variety Mismatch - Variety Mismatch Social 0.009 [-0.07, 0.10] 59.901 [-0.08, 0.10] 58.148
Variety Mismatch - Dialect Literacy -0.015 [-0.10, 0.09] 54.920 [-0.10, 0.08] 61.860
Variety Mismatch Social - Dialect Literacy -0.023 [-0.11, 0.06] 52.834 [-0.11, 0.06] 68.623
Note:
ROPE range = [-0.035, 0.035]. ROPE determined at the 90% CI of the HDI.

A similar pattern appears for the novel words as for all words.

Word Type by Variety Exposure

We also looked at whether there are any differences in performance for different word types across conditions during the testing phase. There appear to be some differences between word types within groups, but are these differences reliable?

Posterior medians and 90% credible intervals are provided in the table below.

Task Variety Exposure Word Type Median Percentile Interval
Reading Variety Match Non-Contrastive 0.178 [0.138, 0.225]
Reading Variety Match Contrastive 0.189 [0.147, 0.236]
Reading Variety Mismatch Non-Contrastive 0.173 [0.132, 0.221]
Reading Variety Mismatch Contrastive 0.215 [0.170, 0.262]
Reading Variety Mismatch Social Non-Contrastive 0.193 [0.150, 0.240]
Reading Variety Mismatch Social Contrastive 0.233 [0.185, 0.283]
Reading Dialect Literacy Non-Contrastive 0.166 [0.126, 0.210]
Reading Dialect Literacy Contrastive 0.262 [0.215, 0.312]
Spelling Variety Match Non-Contrastive 0.266 [0.220, 0.314]
Spelling Variety Match Contrastive 0.282 [0.234, 0.338]
Spelling Variety Mismatch Non-Contrastive 0.273 [0.227, 0.325]
Spelling Variety Mismatch Contrastive 0.274 [0.226, 0.330]
Spelling Variety Mismatch Social Non-Contrastive 0.296 [0.247, 0.348]
Spelling Variety Mismatch Social Contrastive 0.295 [0.242, 0.358]
Spelling Dialect Literacy Non-Contrastive 0.265 [0.217, 0.319]
Spelling Dialect Literacy Contrastive 0.325 [0.269, 0.398]

We can also directly compare the differences in performance for contrastive words relative to non-contrastive words.

Task Variety Exposure Word Type Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Reading Variety Match Contrastive - Non-Contrastive 0.010 [-0.04, 0.06] 55.465 [-0.03, 0.06] 64.850
Reading Variety Mismatch Contrastive - Non-Contrastive 0.041 [-0.00, 0.09] 19.416 [-0.00, 0.09] 93.071
Reading Variety Mismatch Social Contrastive - Non-Contrastive 0.039 [-0.01, 0.09] 22.564 [-0.01, 0.09] 91.375
Reading Dialect Literacy Contrastive - Non-Contrastive 0.095 [0.05, 0.14] 0.000 [0.05, 0.14] 99.946
Spelling Variety Match Contrastive - Non-Contrastive 0.016 [-0.03, 0.06] 49.938 [-0.03, 0.06] 71.429
Spelling Variety Mismatch Contrastive - Non-Contrastive 0.001 [-0.05, 0.05] 57.044 [-0.04, 0.05] 51.167
Spelling Variety Mismatch Social Contrastive - Non-Contrastive 0.000 [-0.05, 0.05] 53.303 [-0.05, 0.05] 50.187
Spelling Dialect Literacy Contrastive - Non-Contrastive 0.060 [0.01, 0.12] 6.953 [0.00, 0.12] 97.225
Note:
ROPE range = [-0.02, 0.02]. ROPE percentage computed over the 90% HDI.
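The Contrastive - Non-Contrastive rows above summarise differences computed on the posterior draws themselves. As a minimal sketch of that idea (in Python rather than the R/tidybayes pipeline used for the analysis, and with simulated stand-in draws rather than draws from the fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior draws of fitted nLEDs for one Task/Variety Exposure cell;
# in the analysis these come from the brms model via tidybayes, not simulation.
contrastive = rng.beta(2.0, 8.0, size=4000)
non_contrastive = rng.beta(2.0, 9.0, size=4000)

# The contrast is computed draw-by-draw, then summarised.
diff = contrastive - non_contrastive             # Contrastive - Non-Contrastive
median = float(np.median(diff))
lower, upper = np.percentile(diff, [2.5, 97.5])  # 95% percentile interval
```

Because the subtraction is applied per draw, the resulting median and interval propagate the full posterior uncertainty of both cells.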

These results reflect those in the plots above. Are any differences reported here reliable?

There is a clear effect of word type (contrastive vs. non-contrastive words) for the reading task in the Dialect Literacy condition. Additionally, there is some evidence for an effect of word type in the reading task for the Variety Mismatch Social and Variety Mismatch conditions, with only around 20% of the HDI within the equivalence bounds and pds over 90%, indicating that around 90% of the posterior shares the sign of the median. There is also some evidence for a word type effect in the spelling task in the Dialect Literacy condition, with only around 7% of the HDI within the equivalence bounds and a pd of over 97%. This suggests that reading performance is impaired for contrastive words whenever participants are exposed to a dialect, while spelling performance is impaired only under the dialect literacy intervention.

However, for all other contrasts a large proportion of the HDI is contained by the ROPE, indicating that we have no strong evidence for either the presence or the absence of an effect of Word Type.

Finally, we ask whether the contrastive effect is stronger in the Variety Mismatch Social condition than in the Variety Mismatch condition.

Task Variety Exposure Word Type Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Reading Variety Mismatch Social - Variety Mismatch Contrastive - Non-Contrastive -0.002 [-0.03, 0.03] 84.765 [-0.03, 0.03] 55.142
Spelling Variety Mismatch Social - Variety Mismatch Contrastive - Non-Contrastive -0.001 [-0.04, 0.04] 73.936 [-0.04, 0.03] 52.179
Note:
ROPE range = [-0.02, 0.02]. ROPE percentage computed over the 90% HDI.

Any differences here are slight and largely contained by the ROPE in both tasks, suggesting no substantial difference in the magnitude of the contrastive vs. non-contrastive effect between these two conditions.

Exploratory Covariate Testing Model

A summary of the Testing Phase model incorporating the mean scores in the vocabulary test as a covariate is provided below. This can be used to determine model diagnostics and coefficients. As with previous models, draws from the posterior for different combinations of conditions were taken using the tidybayes R-package. Similarly, hypothesis tests are provided in the form of Region of Practical Equivalence (ROPE) analyses from these draws using the bayestestR R-package. Extreme caution is needed when interpreting such hypothesis tests, as the following models are purely exploratory.
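As a rough illustration of what the ROPE percentage and probability of direction (pd) reported throughout amount to, here is a Python sketch. The actual analysis uses bayestestR's implementations; this HDI routine assumes a unimodal posterior:

```python
import numpy as np

def hdi(draws, prob=0.90):
    """Narrowest interval containing `prob` of the draws (unimodal case)."""
    x = np.sort(draws)
    n_in = int(np.ceil(prob * len(x)))
    # Candidate intervals all span n_in consecutive sorted draws; pick narrowest.
    widths = x[n_in - 1:] - x[:len(x) - n_in + 1]
    i = int(np.argmin(widths))
    return x[i], x[i + n_in - 1]

def rope_percentage(draws, rope=(-0.02, 0.02), prob=0.90):
    """Percentage of the `prob` HDI falling inside the equivalence bounds."""
    lo, hi = hdi(draws, prob)
    inside_hdi = draws[(draws >= lo) & (draws <= hi)]
    return 100.0 * np.mean((inside_hdi >= rope[0]) & (inside_hdi <= rope[1]))

def p_direction(draws):
    """Share of the posterior carrying the same sign as the median, in %."""
    return 100.0 * max(np.mean(draws > 0), np.mean(draws < 0))
```

A high ROPE percentage supports practical equivalence, a near-zero one supports a non-negligible effect, and intermediate values leave the test undecided; pd near 100% indicates a consistent direction of effect.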

Parameter Estimate Est. Error Interval Rhat Bulk ESS Tail ESS
\(\mu\)
Intercept -0.693 0.086 [-0.861, -0.523] 1.004 1916 3997
Mean Vocabulary Test Nled 0.482 0.121 [0.243, 0.720] 1.002 2498 5228
Task -0.098 0.031 [-0.159, -0.038] 1.002 6745 11373
Variety Exposure1 -0.081 0.103 [-0.283, 0.122] 1.000 2563 5344
Variety Exposure2 -0.059 0.102 [-0.257, 0.140] 1.003 2621 5524
Variety Exposure3 -0.027 0.106 [-0.235, 0.177] 1.003 3048 6633
Word Type 0.047 0.042 [-0.035, 0.129] 1.001 6909 11300
Word Familiarity 0.041 0.028 [-0.015, 0.096] 1.002 5576 9887
Mean Vocabulary Test Nled : Task 0.065 0.047 [-0.027, 0.156] 1.002 6841 11173
Mean Vocabulary Test Nled : Variety Exposure1 0.043 0.148 [-0.246, 0.330] 1.001 2947 6340
Mean Vocabulary Test Nled : Variety Exposure2 0.091 0.152 [-0.211, 0.385] 1.001 3080 6242
Mean Vocabulary Test Nled : Variety Exposure3 0.086 0.155 [-0.220, 0.388] 1.001 3756 6996
Task : Variety Exposure1 -0.013 0.050 [-0.112, 0.085] 1.000 7520 12366
Task : Variety Exposure2 0.129 0.047 [0.037, 0.220] 1.001 7490 12189
Task : Variety Exposure3 -0.020 0.053 [-0.124, 0.086] 1.001 7289 11955
Mean Vocabulary Test Nled : Word Type -0.074 0.051 [-0.173, 0.026] 1.000 8413 13230
Mean Vocabulary Test Nled : Word Familiarity -0.069 0.036 [-0.138, 0.003] 1.001 6391 11737
Task : Word Type 0.026 0.024 [-0.021, 0.072] 1.000 13711 16671
Task : Word Familiarity 0.026 0.014 [-0.001, 0.053] 1.000 13588 16217
Variety Exposure1 : Word Type 0.045 0.043 [-0.039, 0.129] 1.000 8974 14041
Variety Exposure2 : Word Type 0.011 0.038 [-0.064, 0.085] 1.000 9970 13728
Variety Exposure3 : Word Type 0.056 0.044 [-0.030, 0.144] 1.000 10027 15117
Variety Exposure1 : Word Familiarity -0.010 0.030 [-0.069, 0.047] 1.001 6754 12410
Variety Exposure2 : Word Familiarity -0.024 0.028 [-0.079, 0.031] 1.001 7171 12105
Variety Exposure3 : Word Familiarity 0.029 0.033 [-0.035, 0.094] 1.000 7252 13006
Mean Vocabulary Test Nled : Task : Variety Exposure1 0.033 0.074 [-0.113, 0.178] 1.000 7556 12263
Mean Vocabulary Test Nled : Task : Variety Exposure2 -0.172 0.073 [-0.313, -0.029] 1.001 7498 12616
Mean Vocabulary Test Nled : Task : Variety Exposure3 0.012 0.080 [-0.146, 0.169] 1.001 7221 11443
Mean Vocabulary Test Nled : Task : Word Type -0.048 0.036 [-0.118, 0.021] 1.000 14242 16701
Mean Vocabulary Test Nled : Task : Word Familiarity -0.025 0.020 [-0.065, 0.015] 1.000 14006 16024
Mean Vocabulary Test Nled : Variety Exposure1 : Word Type -0.051 0.062 [-0.174, 0.071] 1.000 9228 14135
Mean Vocabulary Test Nled : Variety Exposure2 : Word Type 0.015 0.058 [-0.098, 0.130] 1.000 10637 14973
Mean Vocabulary Test Nled : Variety Exposure3 : Word Type -0.097 0.065 [-0.227, 0.029] 1.000 10299 15074
Mean Vocabulary Test Nled : Variety Exposure1 : Word Familiarity 0.007 0.043 [-0.078, 0.093] 1.001 7328 12247
Mean Vocabulary Test Nled : Variety Exposure2 : Word Familiarity 0.038 0.043 [-0.047, 0.121] 1.000 7262 12548
Mean Vocabulary Test Nled : Variety Exposure3 : Word Familiarity -0.052 0.049 [-0.149, 0.044] 1.000 7571 13132
Task : Variety Exposure1 : Word Type 0.064 0.042 [-0.019, 0.146] 1.001 8799 13967
Task : Variety Exposure2 : Word Type 0.016 0.038 [-0.057, 0.089] 1.001 9765 14421
Task : Variety Exposure3 : Word Type -0.062 0.043 [-0.146, 0.023] 1.000 10159 14892
Task : Variety Exposure1 : Word Familiarity -0.018 0.023 [-0.063, 0.028] 1.000 9589 13660
Task : Variety Exposure2 : Word Familiarity -0.014 0.022 [-0.056, 0.029] 1.001 10556 14908
Task : Variety Exposure3 : Word Familiarity -0.006 0.026 [-0.057, 0.045] 1.000 10195 15080
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Type -0.086 0.062 [-0.206, 0.036] 1.001 9137 13855
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Type -0.017 0.057 [-0.129, 0.094] 1.001 10359 14624
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Type 0.078 0.064 [-0.049, 0.203] 1.000 10506 15284
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Familiarity 0.019 0.034 [-0.048, 0.086] 1.000 9598 14866
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Familiarity 0.015 0.034 [-0.050, 0.081] 1.000 10972 15294
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Familiarity 0.011 0.039 [-0.066, 0.088] 1.000 10233 14349
\(\phi\)
Intercept 2.882 0.113 [2.661, 3.106] 1.001 5142 9843
Mean Vocabulary Test Nled -0.362 0.165 [-0.683, -0.039] 1.000 5370 10626
Task -0.316 0.069 [-0.452, -0.182] 1.001 8696 13997
Variety Exposure1 0.037 0.148 [-0.254, 0.324] 1.001 5402 9940
Variety Exposure2 0.100 0.141 [-0.176, 0.375] 1.001 6107 10795
Variety Exposure3 -0.183 0.153 [-0.483, 0.117] 1.002 5305 9197
Word Type -0.015 0.064 [-0.141, 0.110] 1.000 12885 15759
Word Familiarity 0.129 0.049 [0.035, 0.226] 1.000 9741 14882
Mean Vocabulary Test Nled : Task 0.207 0.103 [0.004, 0.410] 1.001 8274 13721
Mean Vocabulary Test Nled : Variety Exposure1 0.064 0.216 [-0.358, 0.485] 1.000 5342 10110
Mean Vocabulary Test Nled : Variety Exposure2 -0.229 0.214 [-0.649, 0.192] 1.001 6385 10913
Mean Vocabulary Test Nled : Variety Exposure3 0.308 0.228 [-0.145, 0.755] 1.002 5711 9797
Task : Variety Exposure1 -0.010 0.105 [-0.217, 0.195] 1.000 7508 13845
Task : Variety Exposure2 -0.197 0.102 [-0.397, 0.004] 1.001 8000 13761
Task : Variety Exposure3 0.063 0.109 [-0.151, 0.277] 1.000 7938 13728
Mean Vocabulary Test Nled : Word Type 0.010 0.084 [-0.157, 0.175] 1.000 15805 17652
Mean Vocabulary Test Nled : Word Familiarity -0.054 0.066 [-0.183, 0.074] 1.000 10730 14473
Task : Word Type -0.037 0.057 [-0.149, 0.076] 1.000 14295 16702
Task : Word Familiarity 0.001 0.036 [-0.070, 0.071] 1.000 13503 15904
Variety Exposure1 : Word Type 0.071 0.093 [-0.114, 0.255] 1.000 12546 15671
Variety Exposure2 : Word Type -0.048 0.091 [-0.225, 0.130] 1.000 11664 14006
Variety Exposure3 : Word Type -0.178 0.097 [-0.369, 0.010] 1.000 12025 15427
Variety Exposure1 : Word Familiarity -0.004 0.071 [-0.143, 0.134] 1.001 8795 14221
Variety Exposure2 : Word Familiarity -0.087 0.068 [-0.223, 0.047] 1.000 9496 13632
Variety Exposure3 : Word Familiarity -0.001 0.075 [-0.149, 0.145] 1.000 9006 14427
Mean Vocabulary Test Nled : Task : Variety Exposure1 0.045 0.154 [-0.255, 0.349] 1.000 7454 13363
Mean Vocabulary Test Nled : Task : Variety Exposure2 0.331 0.155 [0.026, 0.634] 1.000 8094 14049
Mean Vocabulary Test Nled : Task : Variety Exposure3 -0.164 0.162 [-0.482, 0.150] 1.000 7950 13077
Mean Vocabulary Test Nled : Task : Word Type 0.029 0.083 [-0.135, 0.193] 1.000 14613 16737
Mean Vocabulary Test Nled : Task : Word Familiarity 0.014 0.053 [-0.090, 0.117] 1.000 13943 16692
Mean Vocabulary Test Nled : Variety Exposure1 : Word Type -0.074 0.134 [-0.336, 0.190] 1.000 12628 16385
Mean Vocabulary Test Nled : Variety Exposure2 : Word Type 0.039 0.134 [-0.224, 0.303] 1.000 11930 15432
Mean Vocabulary Test Nled : Variety Exposure3 : Word Type 0.232 0.141 [-0.043, 0.510] 1.000 12148 15760
Mean Vocabulary Test Nled : Variety Exposure1 : Word Familiarity 0.034 0.102 [-0.166, 0.234] 1.001 8819 14203
Mean Vocabulary Test Nled : Variety Exposure2 : Word Familiarity 0.096 0.102 [-0.105, 0.300] 1.000 9577 14850
Mean Vocabulary Test Nled : Variety Exposure3 : Word Familiarity 0.000 0.111 [-0.215, 0.218] 1.000 8931 14479
Task : Variety Exposure1 : Word Type -0.015 0.092 [-0.195, 0.168] 1.000 10766 14940
Task : Variety Exposure2 : Word Type -0.067 0.089 [-0.242, 0.110] 1.000 11973 15099
Task : Variety Exposure3 : Word Type 0.153 0.096 [-0.034, 0.339] 1.000 10897 15422
Task : Variety Exposure1 : Word Familiarity 0.029 0.060 [-0.089, 0.147] 1.000 11580 15050
Task : Variety Exposure2 : Word Familiarity -0.047 0.060 [-0.165, 0.071] 1.000 10879 15650
Task : Variety Exposure3 : Word Familiarity 0.106 0.064 [-0.019, 0.232] 1.001 11116 15296
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Type 0.085 0.134 [-0.180, 0.344] 1.000 10823 15344
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Type 0.103 0.132 [-0.156, 0.359] 1.000 12272 15436
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Type -0.235 0.140 [-0.509, 0.040] 1.000 11041 15011
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Familiarity -0.044 0.087 [-0.214, 0.126] 1.000 11680 15176
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Familiarity 0.075 0.090 [-0.101, 0.252] 1.000 11182 15369
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Familiarity -0.177 0.095 [-0.362, 0.007] 1.001 11092 15198
\(\alpha\)
Intercept 0.582 0.238 [0.116, 1.040] 1.001 6207 11122
Mean Vocabulary Test Nled -1.397 0.310 [-2.001, -0.786] 1.001 6445 11522
Task 0.478 0.094 [0.292, 0.661] 1.001 8022 13257
Variety Exposure1 0.162 0.269 [-0.365, 0.696] 1.001 5973 9887
Variety Exposure2 0.037 0.262 [-0.484, 0.549] 1.002 6309 10686
Variety Exposure3 0.122 0.286 [-0.441, 0.681] 1.002 5831 8551
Word Type 0.404 0.166 [0.074, 0.727] 1.000 10057 13912
Word Familiarity -0.084 0.111 [-0.300, 0.137] 1.001 8832 12986
Mean Vocabulary Test Nled : Task -0.319 0.143 [-0.598, -0.033] 1.001 8085 13079
Mean Vocabulary Test Nled : Variety Exposure1 -0.141 0.400 [-0.932, 0.639] 1.001 6133 11371
Mean Vocabulary Test Nled : Variety Exposure2 -0.085 0.406 [-0.880, 0.724] 1.001 6540 11330
Mean Vocabulary Test Nled : Variety Exposure3 -0.173 0.431 [-1.025, 0.674] 1.002 5886 10085
Task : Variety Exposure1 0.232 0.148 [-0.057, 0.522] 1.001 7808 13332
Task : Variety Exposure2 0.077 0.140 [-0.196, 0.353] 1.000 7830 13202
Task : Variety Exposure3 -0.175 0.157 [-0.481, 0.132] 1.001 8239 12882
Mean Vocabulary Test Nled : Word Type -0.441 0.137 [-0.709, -0.171] 1.000 13598 16988
Mean Vocabulary Test Nled : Word Familiarity 0.101 0.103 [-0.101, 0.302] 1.000 9506 14408
Task : Word Type 0.205 0.062 [0.084, 0.328] 1.000 16890 18011
Task : Word Familiarity -0.132 0.039 [-0.208, -0.055] 1.000 16488 17445
Variety Exposure1 : Word Type -0.172 0.118 [-0.405, 0.063] 1.000 12073 15275
Variety Exposure2 : Word Type -0.111 0.112 [-0.332, 0.111] 1.000 12304 15479
Variety Exposure3 : Word Type 0.063 0.124 [-0.179, 0.308] 1.001 11352 15705
Variety Exposure1 : Word Familiarity -0.071 0.092 [-0.252, 0.111] 1.001 8382 13367
Variety Exposure2 : Word Familiarity -0.069 0.088 [-0.244, 0.103] 1.001 9191 13857
Variety Exposure3 : Word Familiarity 0.124 0.099 [-0.068, 0.322] 1.001 9004 13690
Mean Vocabulary Test Nled : Task : Variety Exposure1 -0.230 0.222 [-0.665, 0.201] 1.001 7853 12887
Mean Vocabulary Test Nled : Task : Variety Exposure2 -0.139 0.219 [-0.571, 0.293] 1.000 8249 13657
Mean Vocabulary Test Nled : Task : Variety Exposure3 0.278 0.239 [-0.190, 0.745] 1.001 8353 13431
Mean Vocabulary Test Nled : Task : Word Type -0.207 0.094 [-0.391, -0.023] 1.000 16241 17929
Mean Vocabulary Test Nled : Task : Word Familiarity 0.164 0.060 [0.047, 0.280] 1.000 16659 17691
Mean Vocabulary Test Nled : Variety Exposure1 : Word Type 0.169 0.176 [-0.177, 0.514] 1.000 11821 15283
Mean Vocabulary Test Nled : Variety Exposure2 : Word Type 0.152 0.174 [-0.193, 0.494] 1.000 12112 15809
Mean Vocabulary Test Nled : Variety Exposure3 : Word Type -0.233 0.189 [-0.606, 0.134] 1.001 11389 16455
Mean Vocabulary Test Nled : Variety Exposure1 : Word Familiarity 0.036 0.138 [-0.236, 0.308] 1.001 8574 14145
Mean Vocabulary Test Nled : Variety Exposure2 : Word Familiarity 0.091 0.137 [-0.179, 0.362] 1.001 9181 13600
Mean Vocabulary Test Nled : Variety Exposure3 : Word Familiarity -0.095 0.152 [-0.396, 0.199] 1.001 8822 13551
Task : Variety Exposure1 : Word Type -0.221 0.105 [-0.429, -0.021] 1.001 12942 15620
Task : Variety Exposure2 : Word Type 0.190 0.099 [-0.004, 0.386] 1.001 13698 17378
Task : Variety Exposure3 : Word Type -0.031 0.112 [-0.250, 0.185] 1.000 12826 16411
Task : Variety Exposure1 : Word Familiarity -0.043 0.066 [-0.171, 0.087] 1.000 13930 16447
Task : Variety Exposure2 : Word Familiarity 0.066 0.062 [-0.057, 0.189] 1.000 13637 16390
Task : Variety Exposure3 : Word Familiarity -0.144 0.072 [-0.287, -0.003] 1.000 12290 15462
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Type 0.233 0.156 [-0.071, 0.541] 1.001 12904 15165
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Type -0.263 0.155 [-0.565, 0.040] 1.000 13840 17030
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Type 0.067 0.171 [-0.264, 0.406] 1.000 12847 16596
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Familiarity 0.087 0.099 [-0.108, 0.279] 1.000 13951 15637
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Familiarity -0.151 0.097 [-0.341, 0.040] 1.000 13658 16651
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Familiarity 0.268 0.110 [0.053, 0.484] 1.000 12338 15805
\(\gamma\)
Intercept -5.895 0.637 [-7.162, -4.662] 1.003 2657 6409
Mean Vocabulary Test Nled 3.152 0.843 [1.492, 4.809] 1.001 4206 9626
Task -0.511 0.393 [-1.281, 0.270] 1.001 10075 14987
Variety Exposure1 -0.245 0.611 [-1.436, 0.944] 1.001 3262 7838
Variety Exposure2 0.046 0.608 [-1.144, 1.238] 1.002 3262 7675
Variety Exposure3 0.130 0.615 [-1.096, 1.326] 1.001 3126 7861
Word Type -0.423 0.359 [-1.127, 0.296] 1.000 14415 16572
Word Familiarity 0.367 0.323 [-0.282, 0.983] 1.000 10289 15030
Mean Vocabulary Test Nled : Task 0.255 0.541 [-0.809, 1.316] 1.001 12258 15498
Mean Vocabulary Test Nled : Variety Exposure1 -0.658 0.800 [-2.222, 0.902] 1.001 5961 11940
Mean Vocabulary Test Nled : Variety Exposure2 0.373 0.811 [-1.228, 1.969] 1.002 5441 11479
Mean Vocabulary Test Nled : Variety Exposure3 0.115 0.807 [-1.481, 1.687] 1.000 6310 12317
Task : Variety Exposure1 0.294 0.466 [-0.628, 1.203] 1.000 12681 16437
Task : Variety Exposure2 -0.053 0.465 [-0.966, 0.849] 1.000 13472 17028
Task : Variety Exposure3 0.182 0.464 [-0.731, 1.088] 1.000 12095 16536
Mean Vocabulary Test Nled : Word Type -0.043 0.512 [-1.068, 0.964] 1.000 14472 16650
Mean Vocabulary Test Nled : Word Familiarity -0.495 0.468 [-1.426, 0.419] 1.000 12431 15540
Task : Word Type -0.158 0.299 [-0.743, 0.431] 1.001 15499 17441
Task : Word Familiarity 0.612 0.260 [0.102, 1.120] 1.000 13680 16359
Variety Exposure1 : Word Type 0.119 0.432 [-0.727, 0.964] 1.000 14184 15990
Variety Exposure2 : Word Type -0.240 0.429 [-1.084, 0.599] 1.000 14480 16847
Variety Exposure3 : Word Type 0.150 0.441 [-0.705, 1.012] 1.001 14962 16635
Variety Exposure1 : Word Familiarity -0.382 0.408 [-1.183, 0.424] 1.000 11711 16109
Variety Exposure2 : Word Familiarity 0.488 0.397 [-0.289, 1.267] 1.000 12021 14108
Variety Exposure3 : Word Familiarity 0.115 0.407 [-0.680, 0.917] 1.000 12684 15885
Mean Vocabulary Test Nled : Task : Variety Exposure1 0.313 0.655 [-0.963, 1.600] 1.000 14261 17238
Mean Vocabulary Test Nled : Task : Variety Exposure2 -0.472 0.667 [-1.786, 0.843] 1.000 14551 16737
Mean Vocabulary Test Nled : Task : Variety Exposure3 -0.223 0.668 [-1.537, 1.077] 1.000 14374 17169
Mean Vocabulary Test Nled : Task : Word Type 0.242 0.431 [-0.599, 1.087] 1.001 15568 17338
Mean Vocabulary Test Nled : Task : Word Familiarity -0.751 0.394 [-1.536, 0.011] 1.000 13804 16781
Mean Vocabulary Test Nled : Variety Exposure1 : Word Type 0.035 0.605 [-1.150, 1.221] 1.000 14573 15758
Mean Vocabulary Test Nled : Variety Exposure2 : Word Type 0.704 0.612 [-0.490, 1.904] 1.000 14716 16716
Mean Vocabulary Test Nled : Variety Exposure3 : Word Type -0.197 0.629 [-1.434, 1.029] 1.000 15105 16416
Mean Vocabulary Test Nled : Variety Exposure1 : Word Familiarity 0.398 0.589 [-0.762, 1.552] 1.000 13039 15757
Mean Vocabulary Test Nled : Variety Exposure2 : Word Familiarity -0.668 0.578 [-1.814, 0.461] 1.000 13289 15567
Mean Vocabulary Test Nled : Variety Exposure3 : Word Familiarity -0.120 0.598 [-1.305, 1.052] 1.000 13911 16592
Task : Variety Exposure1 : Word Type 0.495 0.409 [-0.303, 1.304] 1.000 14738 17135
Task : Variety Exposure2 : Word Type 0.014 0.388 [-0.747, 0.771] 1.000 14793 16713
Task : Variety Exposure3 : Word Type 0.328 0.410 [-0.468, 1.131] 1.000 14522 17347
Task : Variety Exposure1 : Word Familiarity 0.782 0.369 [0.071, 1.517] 1.000 12302 15574
Task : Variety Exposure2 : Word Familiarity -0.684 0.366 [-1.397, 0.033] 1.001 13911 16526
Task : Variety Exposure3 : Word Familiarity -0.102 0.349 [-0.789, 0.590] 1.000 14012 16597
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Type -0.436 0.565 [-1.551, 0.674] 1.000 14690 16983
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Type -0.058 0.565 [-1.167, 1.055] 1.000 14979 17154
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Type -0.563 0.590 [-1.718, 0.586] 1.000 14644 17017
Mean Vocabulary Test Nled : Task : Variety Exposure1 : Word Familiarity -0.390 0.536 [-1.442, 0.658] 1.000 12744 16179
Mean Vocabulary Test Nled : Task : Variety Exposure2 : Word Familiarity 0.443 0.546 [-0.632, 1.497] 1.001 14418 17411
Mean Vocabulary Test Nled : Task : Variety Exposure3 : Word Familiarity 0.056 0.535 [-0.995, 1.115] 1.000 14104 17489
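The coefficients above are on the link scale: a logit link for the \(\mu\), \(\alpha\), and \(\gamma\) distributional parameters and a log link for \(\phi\). A small Python sketch of mapping the intercepts back to the response scale (note that with contrast-coded predictors the intercept only approximates the marginal mean):

```python
import math

def inv_logit(x):
    """Inverse of the logit link used for the mu, alpha, and gamma parameters."""
    return 1.0 / (1.0 + math.exp(-x))

# Intercepts from the table above (link scale).
mu_intercept = -0.693    # logit scale
phi_intercept = 2.882    # log scale

mean_nled = inv_logit(mu_intercept)   # mean nLED excluding 0s and 1s, ~0.33
precision = math.exp(phi_intercept)   # beta precision phi, ~17.9
```

The same inverse-logit transform applies to the \(\alpha\) (zero-one inflation) and \(\gamma\) (conditional one-inflation) intercepts to recover probabilities.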

We first explored whether mean vocabulary test performance (i.e. in terms of mean nLED) predicts testing performance, and whether this varies across Task, Variety Exposure condition, and Word Type. A plot of this relationship is shown below.

It’s quite difficult to make out an overall pattern here, so we instead performed a median split on vocabulary test performance, categorising participants into those with high and low mean nLEDs relative to the median score.
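A minimal sketch of such a median split (in Python, with hypothetical participant scores; the report does not state how scores exactly at the median were assigned, so here they are placed in the Low group as an assumption):

```python
import statistics

# Hypothetical per-participant mean nLEDs from the vocabulary test.
vocab_nled = {"p01": 0.12, "p02": 0.31, "p03": 0.25, "p04": 0.08, "p05": 0.40}

cutoff = statistics.median(vocab_nled.values())

# Scores at or below the median go to the Low group (tie handling assumed).
groups = {p: ("Low" if v <= cutoff else "High") for p, v in vocab_nled.items()}
```

Note that since nLED is an error score, the Low group corresponds to the better performers in the vocabulary test.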

Word Type by Task, Variety Exposure, and Vocabulary Test Performance

First, we looked at the effect of word type across participants with high and low mean nLEDs in the vocabulary testing phase split by task and variety exposure condition.

Exposure Test nLED Group Task Variety Exposure Word Type Median Percentile Interval
Low Reading Variety Match Non-Contrastive 0.096 [0.036, 0.179]
Low Reading Variety Match Contrastive 0.102 [0.040, 0.187]
Low Reading Variety Mismatch Non-Contrastive 0.098 [0.035, 0.191]
Low Reading Variety Mismatch Contrastive 0.161 [0.086, 0.236]
Low Reading Variety Mismatch Social Non-Contrastive 0.105 [0.037, 0.201]
Low Reading Variety Mismatch Social Contrastive 0.161 [0.082, 0.243]
Low Reading Dialect Literacy Non-Contrastive 0.103 [0.043, 0.176]
Low Reading Dialect Literacy Contrastive 0.220 [0.142, 0.283]
Low Spelling Variety Match Non-Contrastive 0.209 [0.132, 0.278]
Low Spelling Variety Match Contrastive 0.242 [0.168, 0.311]
Low Spelling Variety Mismatch Non-Contrastive 0.221 [0.138, 0.299]
Low Spelling Variety Mismatch Contrastive 0.212 [0.125, 0.301]
Low Spelling Variety Mismatch Social Non-Contrastive 0.232 [0.135, 0.313]
Low Spelling Variety Mismatch Social Contrastive 0.239 [0.151, 0.318]
Low Spelling Dialect Literacy Non-Contrastive 0.234 [0.148, 0.315]
Low Spelling Dialect Literacy Contrastive 0.296 [0.212, 0.376]
High Reading Variety Match Non-Contrastive 0.232 [0.166, 0.315]
High Reading Variety Match Contrastive 0.244 [0.175, 0.335]
High Reading Variety Mismatch Non-Contrastive 0.255 [0.179, 0.352]
High Reading Variety Mismatch Contrastive 0.270 [0.205, 0.355]
High Reading Variety Mismatch Social Non-Contrastive 0.264 [0.190, 0.356]
High Reading Variety Mismatch Social Contrastive 0.291 [0.214, 0.402]
High Reading Dialect Literacy Non-Contrastive 0.218 [0.154, 0.307]
High Reading Dialect Literacy Contrastive 0.296 [0.229, 0.394]
High Spelling Variety Match Non-Contrastive 0.293 [0.239, 0.361]
High Spelling Variety Match Contrastive 0.298 [0.244, 0.371]
High Spelling Variety Mismatch Non-Contrastive 0.335 [0.264, 0.450]
High Spelling Variety Mismatch Contrastive 0.341 [0.267, 0.456]
High Spelling Variety Mismatch Social Non-Contrastive 0.339 [0.277, 0.420]
High Spelling Variety Mismatch Social Contrastive 0.338 [0.265, 0.457]
High Spelling Dialect Literacy Non-Contrastive 0.290 [0.231, 0.362]
High Spelling Dialect Literacy Contrastive 0.349 [0.275, 0.470]

There appear to be some differences across conditions here. Are these borne out in the data?

Exposure Test nLED Group Task Variety Exposure Word Type Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
Low Reading Variety Match Contrastive - Non-Contrastive 0.006 [-0.03, 0.05] 62.708 [-0.03, 0.04] 59.7
Low Reading Variety Mismatch Contrastive - Non-Contrastive 0.060 [0.02, 0.10] 0.888 [0.02, 0.10] 99.2
Low Reading Variety Mismatch Social Contrastive - Non-Contrastive 0.053 [0.01, 0.10] 7.880 [0.00, 0.10] 97.4
Low Reading Dialect Literacy Contrastive - Non-Contrastive 0.114 [0.07, 0.16] 0.000 [0.07, 0.16] 100.0
Low Spelling Variety Match Contrastive - Non-Contrastive 0.032 [-0.02, 0.09] 31.743 [-0.02, 0.09] 84.7
Low Spelling Variety Mismatch Contrastive - Non-Contrastive -0.008 [-0.06, 0.04] 53.496 [-0.06, 0.04] 59.5
Low Spelling Variety Mismatch Social Contrastive - Non-Contrastive 0.009 [-0.05, 0.06] 43.063 [-0.05, 0.07] 59.3
Low Spelling Dialect Literacy Contrastive - Non-Contrastive 0.063 [-0.00, 0.12] 10.433 [-0.00, 0.12] 93.6
High Reading Variety Match Contrastive - Non-Contrastive 0.012 [-0.04, 0.06] 50.721 [-0.04, 0.06] 66.0
High Reading Variety Mismatch Contrastive - Non-Contrastive 0.015 [-0.04, 0.07] 47.503 [-0.04, 0.06] 66.4
High Reading Variety Mismatch Social Contrastive - Non-Contrastive 0.027 [-0.03, 0.08] 37.292 [-0.03, 0.09] 77.3
High Reading Dialect Literacy Contrastive - Non-Contrastive 0.079 [0.02, 0.14] 0.000 [0.02, 0.14] 99.0
High Spelling Variety Match Contrastive - Non-Contrastive 0.006 [-0.05, 0.06] 51.942 [-0.04, 0.06] 57.4
High Spelling Variety Mismatch Contrastive - Non-Contrastive 0.006 [-0.05, 0.07] 43.618 [-0.05, 0.07] 57.1
High Spelling Variety Mismatch Social Contrastive - Non-Contrastive 0.000 [-0.06, 0.07] 44.950 [-0.06, 0.06] 50.4
High Spelling Dialect Literacy Contrastive - Non-Contrastive 0.059 [0.00, 0.14] 13.097 [-0.00, 0.13] 95.5
Note:
ROPE range = [-0.02, 0.02]. ROPE percentage computed over the 90% HDI.

There is a clear word type effect in all of the dialect intervention conditions for the reading task, but only when participants have a low mean nLED (i.e. good performance) in the vocabulary test. When participants have a high mean nLED in the vocabulary test, the only consistent word type effect in the reading task is in the Dialect Literacy condition. In the spelling task, there is a consistent word type effect only for the Dialect Literacy condition, regardless of performance in the vocabulary testing phase. Thus, performance is worse for contrastive words in the reading task in the three dialect conditions only when performance in the vocabulary test shows that the dialect form of the language is sufficiently entrenched. Poor performance in the vocabulary testing phase indicates that the dialect form of the language was not sufficiently entrenched prior to training in the standard form, so that no word type effects can occur. The Dialect Literacy condition shows this effect in both tasks and for participants with both high and low vocabulary-test nLEDs because this condition interleaves the dialect form with the standard form during training (rather than front-loading it prior to the vocabulary test), allowing sufficient entrenchment of the dialect form to cause a great deal of local interference in both tasks.

Variety Exposure by Task and Vocabulary Test Performance for Novel Words Only

We next focussed on novel words to see whether any differences occur for novel word decoding. The following analyses summarise the patterns in the covariate model.

It’s clear from the figure that testing-phase performance is generally worse for those with high mean nLEDs in the vocabulary test. Do any differences emerge if we directly compare the difference scores? There seems to be a pattern whereby Variety Match and Variety Mismatch differ for the spelling task, while all other contrasts indicate equivalent performance across conditions. The following analysis asks whether there are any differences between participants with low and high nLEDs (relative to the median) in the vocabulary testing phase, split by task and variety exposure condition.
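The High - Low comparisons between conditions reported below are differences of difference scores, computed draw-by-draw on the posterior. A minimal Python sketch with simulated stand-in draws (the real draws come from the covariate brms model via tidybayes):

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in posterior draws of fitted nLEDs per (group, condition) cell.
draws = {
    ("High", "Variety Match"):    rng.beta(3.0, 7.0, size=4000),
    ("Low",  "Variety Match"):    rng.beta(2.0, 8.0, size=4000),
    ("High", "Variety Mismatch"): rng.beta(3.0, 6.0, size=4000),
    ("Low",  "Variety Mismatch"): rng.beta(2.0, 7.0, size=4000),
}

# High - Low difference score within each condition...
hl_match = draws[("High", "Variety Match")] - draws[("Low", "Variety Match")]
hl_mismatch = draws[("High", "Variety Mismatch")] - draws[("Low", "Variety Mismatch")]

# ...and the between-condition difference of those difference scores.
dod = hl_mismatch - hl_match
median_dod = float(np.median(dod))
```

The resulting `dod` draws can then be summarised with the same ROPE and pd machinery as the simpler contrasts.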

Exposure Test nLED Group Task Variety Exposure Median Percentile Interval
Low Reading Variety Match 0.176 [0.091, 0.277]
Low Reading Variety Mismatch 0.179 [0.095, 0.255]
Low Reading Variety Mismatch Social 0.181 [0.102, 0.263]
Low Reading Dialect Literacy 0.207 [0.112, 0.294]
Low Spelling Variety Match 0.225 [0.135, 0.294]
Low Spelling Variety Mismatch 0.249 [0.152, 0.376]
Low Spelling Variety Mismatch Social 0.199 [0.085, 0.311]
Low Spelling Dialect Literacy 0.262 [0.169, 0.361]
High Reading Variety Match 0.251 [0.181, 0.351]
High Reading Variety Mismatch 0.266 [0.196, 0.345]
High Reading Variety Mismatch Social 0.233 [0.163, 0.330]
High Reading Dialect Literacy 0.273 [0.197, 0.375]
High Spelling Variety Match 0.297 [0.237, 0.367]
High Spelling Variety Mismatch 0.343 [0.249, 0.518]
High Spelling Variety Mismatch Social 0.335 [0.250, 0.475]
High Spelling Dialect Literacy 0.316 [0.227, 0.497]

We explored whether the variety exposure conditions differ in how far apart the low and high performers are. Is there any difference between Variety Exposure groups in their difference scores between high and low performers?

Exposure Test nLED Group Task Variety Exposure Median Percentile Interval ROPE Percentage HDI Interval P(Direction)
High - Low Reading Variety Mismatch - Variety Match 0.011 [-0.09, 0.11] 31.964 [-0.09, 0.12] 57.9
High - Low Reading Variety Mismatch Social - Variety Match -0.023 [-0.13, 0.08] 24.417 [-0.14, 0.08] 64.4
High - Low Reading Dialect Literacy - Variety Match -0.006 [-0.13, 0.12] 24.639 [-0.13, 0.12] 53.2
High - Low Reading Variety Mismatch Social - Variety Mismatch -0.034 [-0.14, 0.07] 23.529 [-0.14, 0.06] 72.9
High - Low Reading Dialect Literacy - Variety Mismatch -0.018 [-0.14, 0.10] 24.306 [-0.14, 0.09] 59.1
High - Low Reading Dialect Literacy - Variety Mismatch Social 0.017 [-0.11, 0.14] 22.309 [-0.11, 0.14] 58.7
High - Low Spelling Variety Mismatch - Variety Match 0.020 [-0.09, 0.17] 24.417 [-0.10, 0.16] 61.3
High - Low Spelling Variety Mismatch Social - Variety Match 0.069 [-0.03, 0.18] 14.539 [-0.04, 0.17] 88.3
High - Low Spelling Dialect Literacy - Variety Match -0.017 [-0.14, 0.13] 18.424 [-0.14, 0.13] 57.4
High - Low Spelling Variety Mismatch Social - Variety Mismatch 0.049 [-0.10, 0.18] 18.646 [-0.08, 0.19] 72.8
High - Low Spelling Dialect Literacy - Variety Mismatch -0.034 [-0.21, 0.13] 17.980 [-0.21, 0.13] 64.2
High - Low Spelling Dialect Literacy - Variety Mismatch Social -0.088 [-0.24, 0.08] 11.876 [-0.24, 0.08] 78.5
Note:
ROPE range = [-0.02, 0.02]. ROPE percentage computed over the 90% HDI.

Looking at differences split by task, the four variety exposure conditions do not differ from one another in the performance gap between participants with high and low mean nLEDs in the vocabulary testing phase. This suggests there is no reliable difference in novel word decoding between high and low performers across the variety exposure conditions.

TODO:

  • add in task and variety exposure for novel words to testing phase to the report.

  • ev_n summary needed: vocab test and variety for novel words.

  • etv_n summary needed: vocab test, task, and variety for novel words.

  • ev_n plots (regular and contrastive) needed

  • etv_n plots (regular and contrastive) needed

  • change current etv_n to t_ev_n.

  • change current etv_n to t_ev_n in the current document. This analysis is probably not needed.

  • regular variety exposure needed for novel words? Needs summary and plots.

  • work on renaming function for large tables of coefficients.

  • want it to split by mu, phi, zoi, and coi (as the greek letters)

  • Fix problem with “” not showing in tables. Probably to do with formatting of uppercase etc.